Anastasiia Koloskova
Optimization Algorithms for Decentralized, Distributed and Collaborative Machine Learning
Research Abstract:
In distributed learning, multiple workers (e.g., GPUs) contribute in parallel to expedite the training of machine learning models. In collaborative learning, the training data is distributed among several participants due to its privacy-sensitive nature, and these participants collaborate to solve a common machine learning task. My research addresses key challenges arising in both scenarios, including communication efficiency, data heterogeneity, and privacy protection of the training data. From an optimization theory perspective, we analyze existing, widely used algorithms and, based on these theoretical findings, propose more efficient training schemes.
Bio:
She is a final-year PhD student at École Polytechnique Fédérale de Lausanne (EPFL) under the supervision of Prof. Martin Jaggi. Her research focuses on optimization methods for decentralized, distributed and federated learning. She is broadly interested in optimization for machine learning and privacy. She is a recipient of a Google PhD Fellowship in machine learning.